Fixed-Weight Difference Target Propagation
Authors
Abstract
Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values generated at the output layer. However, this causes certain drawbacks; e.g., careful hyperparameter tuning is required to synchronize the feedforward and feedback training, and the feedback path usually needs more frequent updates than the feedforward path. Learning the feedback networks is sufficient to make TP capable of training, but is having these layer-wise autoencoders a necessary condition for TP to work? We answer this question by presenting Fixed-Weight Difference Target Propagation (FW-DTP), which keeps the feedback weights constant during training. We confirmed that this simple method, which naturally resolves the abovementioned problems of TP, can still deliver informative target values to hidden layers for a given task; indeed, FW-DTP consistently achieves higher test performance than its baseline, Difference Target Propagation (DTP), on four classification datasets. We also present a novel propagation architecture that explains the exact feedback function of DTP to analyze FW-DTP. Our code is available at https://github.com/TatsukichiShibuya/Fixed-Weight-Difference-Target-Propagation.
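The difference correction at the heart of DTP, which FW-DTP inherits while freezing the feedback weights, can be sketched as follows. The layer sizes, tanh nonlinearity, random weights, and the placeholder output target below are illustrative assumptions, not the paper's exact architecture or training procedure.

```python
# Sketch of DTP's target computation, t_l = h_l + g(t_{l+1}) - g(h_{l+1}),
# where g is the feedback mapping. In DTP, g's weights are trained as a
# layer-wise autoencoder; in FW-DTP they stay at their random initial values.
import numpy as np

rng = np.random.default_rng(0)

def f(h, W):
    # Feedforward mapping of one layer (illustrative tanh layer).
    return np.tanh(W @ h)

def g(h, V):
    # Feedback mapping, an approximate inverse of f (same illustrative form).
    return np.tanh(V @ h)

# Two layers with illustrative sizes: 4 -> 8 -> 6.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(6, 8))
V2 = rng.normal(size=(8, 6))  # feedback weights for layer 2 -> layer 1

h0 = rng.normal(size=4)       # input
h1 = f(h0, W1)                # hidden activation
h2 = f(h1, W2)                # output activation

# Placeholder output target (in practice derived from the loss at the top).
t2 = h2 - 0.1 * rng.normal(size=6)

# The difference correction: propagate the target down through g, and
# subtract g(h2) so that reconstruction error in g cancels out.
t1 = h1 + g(t2, V2) - g(h2, V2)

# Sanity check: when the target equals the activation, the hidden target
# reduces to the hidden activation itself, regardless of how good g is.
assert np.allclose(h1 + g(h2, V2) - g(h2, V2), h1)
```

Keeping `V2` fixed at its random initialization is exactly the change FW-DTP makes: the difference correction still produces usable hidden-layer targets without training the feedback path.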
Similar Resources
Difference Target Propagation
Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of nonlinearity where the relation between parameters and cost is actually discrete. Inspired by th...
Fixed Target Energies
We calculated the color-octet contribution to the J/ψ hadroproduction at fixed target energies √s ≃ 40 GeV. We consider the J/ψ production with transverse momenta which cannot be explained by primordial motion of partons, pT > 1.5 GeV. It is shown that the color-octet contribution is dominant at these energies and reduces the large discrepancies between experimental data and color singlet model pred...
Back-Propagation Without Weight Transport
In back-propagation (Rumelhart et al, 1985) connection weights are used both to compute node activations and error gradients for hidden units. Grossberg (1987) has argued that the dual use of the same synaptic connections ("weight transport") constitutes a bidirectional flow of information through synapses, which is biologically implausible. In this paper we formally and empirically demonstrate...
Fixed Point Solutions of Belief Propagation
Belief propagation (BP) is an iterative method to perform approximate inference on arbitrary graphical models. Whether BP converges and whether the solution is a unique fixed point depends on both the structure and the parametrization of the model. To understand this dependence it is interesting to find all fixed points. In this work, we formulate a set of polynomial equations, the solutions of whi...
Fourier Finite-Difference Wave Propagation
We introduce a novel technique for seismic wave extrapolation in time. The technique involves cascading a Fourier Transform operator and a finite difference operator to form a chain operator: Fourier Finite Differences (FFD). We derive the FFD operator from a pseudo-analytical solution of the acoustic wave equation. 2-D synthetic examples demonstrate that the FFD operator can have high accuracy...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2023
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i8.26171